Stability Enhanced Large-Margin Classifier Selection
Authors
Abstract
Similar Papers
Large Margin Classifier via Semiparametric Inference
In this paper, we construct a learning method for the stochastic perceptron based on semiparametric inference, and show that this method produces large-margin solutions. In semiparametric inference, the parameters are divided into structural parameters, which are to be estimated, and nuisance parameters, in which we have no interest. Here, the weight vector of the perceptron is defined as the structural...
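For context, the classical perceptron learning rule that such large-margin variants build on can be sketched as follows; this is the standard update, not the paper's semiparametric construction:

```python
import numpy as np

def perceptron_train(X, y, epochs=20):
    """Classical perceptron: labels in {-1, +1}; update the weight
    vector whenever a sample is misclassified."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:  # misclassified -> move w toward yi*xi
                w = w + yi * xi
    return w

# Toy linearly separable data; last column acts as a bias feature.
X = np.array([[1.0, 2.0, 1.0], [2.0, 3.0, 1.0],
              [-1.0, -1.5, 1.0], [-2.0, -1.0, 1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
print(np.sign(X @ w))  # [ 1.  1. -1. -1.]
```

The plain perceptron stops at any separating hyperplane; the large-margin methods surveyed here instead seek solutions with maximal separation.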
Large-margin feature selection for monotonic classification
Monotonic classification plays an important role in the field of decision analysis, where decision values are ordered and samples with better feature values should not be classified into a worse class. Monotonic classification tasks seem conceptually simple, but it is difficult to exploit and explain the order structure in practice. In this work, we discuss the issue of feature selection unde...
Large Margin Subspace Learning for feature selection
Recent research has shown the benefits of the large-margin framework for feature selection. In this paper, we propose a novel feature selection algorithm, termed Large Margin Subspace Learning (LMSL), which seeks a projection matrix that maximizes the margin of a given sample, defined as the distance between the nearest miss (the nearest neighbor with a different label) and the nearest hit (th...
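The sample margin described in this abstract (nearest-miss distance minus nearest-hit distance, as in Relief-style methods) can be sketched as follows; the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def sample_margin(X, y, i):
    """Margin of sample i: distance to its nearest miss (nearest
    neighbor with a different label) minus distance to its nearest
    hit (nearest neighbor with the same label)."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf  # exclude the sample itself
    same = y == y[i]
    nearest_hit = d[same].min()
    nearest_miss = d[~same].min()
    return nearest_miss - nearest_hit

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
print(sample_margin(X, y, 0))  # ~1.3142: nearest hit is closer than nearest miss
```

A positive margin means the sample sits closer to its own class than to any other; LMSL seeks a projection under which these margins are maximized.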
SP-SVM: Large Margin Classifier for Data on Multiple Manifolds
As one of the most important state-of-the-art classification techniques, Support Vector Machine (SVM) has been widely adopted in many real-world applications, such as object detection, face recognition, text categorization, etc., due to its competitive practical performance and elegant theoretical interpretation. However, it treats all samples independently, and ignores the fact that, in many r...
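The point that a standard SVM treats samples independently can be seen in the loss itself: each sample contributes its own hinge term. A minimal self-contained sketch of a linear SVM trained by subgradient descent (illustrative hyperparameters and toy data, not the SP-SVM method):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Linear SVM via subgradient descent on the regularized hinge
    loss; labels in {-1, +1}. Each sample's hinge term is computed
    independently of any manifold structure in the data."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1  # samples violating the margin
        gw = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        gb = -y[mask].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.2, 0.9]])
y = np.array([-1, -1, 1, 1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w + b))
```

Because each hinge term involves one sample at a time, relationships among samples (such as lying on a shared manifold) are invisible to the objective, which is the gap SP-SVM addresses.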
Class Conditional Nearest Neighbor and Large Margin Instance Selection
The one nearest neighbor (1-NN) rule uses instance proximity followed by class labeling information for classifying new instances. This paper presents a framework for studying properties of the training set related to proximity and labeling information, in order to improve the performance of the 1-NN rule. To this aim, a so-called class conditional nearest neighbor (c.c.n.n.) relation is introd...
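The 1-NN rule the abstract starts from is simply: assign a new instance the label of its single closest training instance. A minimal sketch (Euclidean distance; names and data are illustrative):

```python
import numpy as np

def one_nn_predict(X_train, y_train, x):
    """1-NN rule: return the label of the training instance
    nearest to x under Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
y_train = np.array(["a", "b"])
print(one_nn_predict(X_train, y_train, np.array([0.2, 0.1])))  # a
```

Instance-selection methods such as the one described here prune or reweight the training set so that this simple rule generalizes better.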
Journal
Journal Title: Statistica Sinica
Year: 2018
ISSN: 1017-0405
DOI: 10.5705/ss.202016.0260